feat(llm): add Kilo Gateway provider support #189

skulldogged wants to merge 4 commits into spacedriveapp:main from
Conversation
Walkthrough

This PR introduces comprehensive support for Kilo Gateway, a new OpenAI-compatible multi-provider LLM gateway. Changes span documentation, the configuration system, API type definitions, UI components, and the LLM execution layers, enabling the provider across all major components.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: ✅ 3 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
interface/src/components/ModelSelect.tsx (1)
149-152: ⚠️ Potential issue | 🟡 Minor

The `?? 99` fallback is dead code: unknown providers sort to the top, not the bottom.
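To see the failure concretely, here is a standalone sketch (the `providerOrder` contents are illustrative, not the component's actual list):

```typescript
// Known providers in the order they should appear in the picker.
const providerOrder = ["anthropic", "openai", "kilo"];

// indexOf returns -1 (not null/undefined) for a missing key, so the
// `?? 99` fallback never applies and -1 sorts unknowns to the front.
const buggy = ["kilo", "mystery", "openai"].sort(
  (a, b) => (providerOrder.indexOf(a) ?? 99) - (providerOrder.indexOf(b) ?? 99),
);
console.log(buggy); // "mystery" (index -1) lands first

// Treating -1 as an explicit sentinel sends unknowns to the back.
const rank = (p: string): number => {
  const i = providerOrder.indexOf(p);
  return i === -1 ? 99 : i;
};
const fixed = ["kilo", "mystery", "openai"].sort((a, b) => rank(a) - rank(b));
console.log(fixed); // known providers first, "mystery" last
```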
`Array.prototype.indexOf` returns `-1` (not `null`/`undefined`) for missing items, so the nullish coalescing `?? 99` never fires. Any provider not in `providerOrder` gets index `-1`, causing it to sort before all listed providers rather than after them. (`|| 99` would not help either, since `-1` is truthy.) The fix is an explicit sentinel check:

🐛 Proposed fix

```diff
 const sortedProviders = Object.keys(grouped).sort(
 	(a, b) =>
-		(providerOrder.indexOf(a) ?? 99) - (providerOrder.indexOf(b) ?? 99),
+		(providerOrder.indexOf(a) === -1 ? 99 : providerOrder.indexOf(a)) -
+		(providerOrder.indexOf(b) === -1 ? 99 : providerOrder.indexOf(b)),
 );
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `interface/src/components/ModelSelect.tsx` around lines 149-152: the comparator for `sortedProviders` uses `(providerOrder.indexOf(a) ?? 99)`, which never fires because `indexOf` returns `-1` for missing values, so unknown providers sort to the top. Update the comparator to treat `-1` as a sentinel, e.g. `const ia = providerOrder.indexOf(a); const ib = providerOrder.indexOf(b); return (ia === -1 ? 99 : ia) - (ib === -1 ? 99 : ib);`, so unknown providers sort after known ones.

src/config.rs (1)
2289-2296: ⚠️ Potential issue | 🟡 Minor

All API key environment variables in `load_from_env()` silently drop `VarError::NotUnicode` instead of handling it explicitly.

The `.ok()` pattern here and in the fallback at lines 2668-2675 swallows non-UTF-8 env values without surfacing them. This contradicts the coding guideline: "Don't silently discard errors; no `let _ =` on Results. Handle, log, or propagate errors." While rare, misconfigured env vars become undiagnosable. Note that `load_from_toml()` already uses `resolve_env_value()` for the primary lookup; apply consistent error handling to `load_from_env()` as well, covering all API keys (anthropic, openai, openrouter, kilo, zhipu, groq, together, fireworks, deepseek, xai, mistral, gemini, ollama, ollama_base_url).

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/config.rs` around lines 2289-2296: in `load_from_env()`, do not call `.ok()` on `std::env::var` for the various LLM API keys (the `LlmConfig` fields: `anthropic_key`, `openai_key`, `openrouter_key`, `kilo_key`, `zhipu_key`, and the other keys mentioned), because that silently drops `VarError::NotUnicode`. Instead, perform explicit handling: match the `Result` from `std::env::var`, and surface `VarError::NotUnicode` via the existing `resolve_env_value()` helper (or log/propagate a clear error) so non-UTF-8 env values are not swallowed. Apply this change consistently for all API key lookups (including groq, together, fireworks, deepseek, xai, mistral, gemini, ollama, ollama_base_url, and the anthropic fallback logic) so errors are logged or returned rather than ignored.
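A minimal sketch of the explicit handling the prompt describes (the helper name and logging style are illustrative, not spacebot's actual `resolve_env_value()`):

```rust
use std::env::{self, VarError};

// Read an optional API key without `.ok()`, so a non-UTF-8 value is
// logged instead of being silently treated as "not set".
fn read_key(name: &str) -> Option<String> {
    match env::var(name) {
        Ok(value) => Some(value),
        Err(VarError::NotPresent) => None,
        Err(VarError::NotUnicode(raw)) => {
            eprintln!("env var {name} is set but not valid UTF-8: {raw:?}");
            None
        }
    }
}

fn main() {
    // PATH is present on any normal system; a made-up name is not.
    println!("{} {}", read_key("PATH").is_some(), read_key("SPACEBOT_NO_SUCH_KEY").is_some());
}
```

The same three-way match can back each `LlmConfig` field lookup, keeping `load_from_env()` consistent with the `resolve_env_value()` path used by `load_from_toml()`.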
🧹 Nitpick comments (3)
src/api/models.rs (1)
104-119: `direct_provider_mapping("kilo")` is dead code; kilo models need to be in `extra_models()` instead.

Kilo is a gateway (not a base model provider), so models.dev almost certainly doesn't list a `"kilo"` provider, and this `match` arm will never fire. More critically, since `extra_models()` has no kilo entries, `GET /api/models?provider=kilo` returns an empty list, leaving the model picker in the Kilo config dialog blank; users must type the model ID manually, unlike every other provider. Known Kilo gateway model IDs follow the routing convention already used elsewhere in this PR (`kilo/anthropic/claude-sonnet-4.5`, `kilo/anthropic/claude-haiku-4.5`, etc.). These should be seeded in `extra_models()`.

♻️ Proposed fix

```diff
-            "kilo" => Some("kilo"),
```

And in `extra_models()`:

```diff
+            // Kilo Gateway
+            ModelInfo {
+                id: "kilo/anthropic/claude-sonnet-4.5".into(),
+                name: "Claude Sonnet 4.5".into(),
+                provider: "kilo".into(),
+                context_window: Some(200_000),
+                tool_call: true,
+                reasoning: false,
+                input_audio: false,
+            },
+            ModelInfo {
+                id: "kilo/anthropic/claude-haiku-4.5".into(),
+                name: "Claude Haiku 4.5".into(),
+                provider: "kilo".into(),
+                context_window: Some(200_000),
+                tool_call: true,
+                reasoning: false,
+                input_audio: false,
+            },
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/api/models.rs` around lines 104-119: the `"kilo"` arm in `direct_provider_mapping` is dead and prevents Kilo gateway models from being exposed. Remove the `"kilo" => Some("kilo")` match arm from `direct_provider_mapping` and instead add the known Kilo gateway model IDs to the `extra_models()` seed list (e.g. entries like `"kilo/anthropic/claude-sonnet-4.5"`, `"kilo/anthropic/claude-haiku-4.5"`, or other Kilo gateway IDs used elsewhere in the repo) so `GET /api/models?provider=kilo` returns those models and the Kilo config picker is populated. Update `extra_models()` to include these new entries and ensure the provider filtering logic recognizes `"kilo"` as a provider for those IDs.

src/llm/model.rs (1)
746-757: Unreachable `ApiType::KiloGateway` arm in `call_openai_compatible`.

`attempt_completion` dispatches `ApiType::KiloGateway` exclusively to `call_openai_compatible_with_optional_auth`, so the `KiloGateway` branch in this function's `endpoint_path` match can never execute. Remove it to keep the match consistent with actual routing.

🧹 Proposed cleanup

```diff
-            ApiType::OpenAiChatCompletions | ApiType::Gemini | ApiType::KiloGateway => {
+            ApiType::OpenAiChatCompletions | ApiType::Gemini => {
                 "/chat/completions"
             }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `src/llm/model.rs` around lines 746-757: the match in `call_openai_compatible` that sets `endpoint_path` includes an unreachable `ApiType::KiloGateway` arm. Remove `KiloGateway` from that match so the arms reflect only the `ApiType` variants actually routed here (e.g. `OpenAiCompletions`, `OpenAiResponses`, `OpenAiChatCompletions`, `Gemini`; `Anthropic` handling remains as-is), and ensure any logic that depends on `provider_config.api_type` in `call_openai_compatible` remains consistent with `call_openai_compatible_with_optional_auth` dispatching `KiloGateway` elsewhere.

README.md (1)
187: `api_type` comment omits the `gemini` and `kilo_gateway` values.

The inline comment lists four of the six valid `api_type` values. Since config.mdx now documents all six, this example comment could point readers to the full set, or at least include `kilo_gateway` for consistency with what this PR introduces.

📝 Suggested update

```diff
-api_type = "openai_completions" # or "openai_chat_completions", "openai_responses", "anthropic"
+api_type = "openai_completions" # or "openai_chat_completions", "openai_responses", "anthropic", "gemini", "kilo_gateway"
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `README.md` at line 187: the inline comment for the `api_type` example is missing the `"gemini"` and `"kilo_gateway"` options. Update the comment attached to the `api_type` example so it lists all valid values (`"openai_completions"`, `"openai_chat_completions"`, `"openai_responses"`, `"anthropic"`, `"gemini"`, `"kilo_gateway"`), or change it to point readers to the full set in config.mdx.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `docs/content/docs/(configuration)/config.mdx`:
- Line 177: Update the sentence that lists implicit env fallbacks for LLM keys
to include all 12 environment variables referenced by the [llm] configuration
block: ANTHROPIC_API_KEY, OPENAI_API_KEY, OPENROUTER_API_KEY, KILO_API_KEY,
ZHIPU_API_KEY, GROQ_API_KEY, TOGETHER_API_KEY, FIREWORKS_API_KEY,
DEEPSEEK_API_KEY, XAI_API_KEY, MISTRAL_API_KEY, and OPENCODE_ZEN_API_KEY so the
docs match the implicit fallback behavior implemented in the [llm] config
parsing (see the [llm] configuration block and its environment lookups).
ℹ️ Review info
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (14)

- README.md
- docs/content/docs/(configuration)/config.mdx
- docs/content/docs/(deployment)/roadmap.mdx
- docs/content/docs/(getting-started)/quickstart.mdx
- interface/src/api/client.ts
- interface/src/components/ModelSelect.tsx
- interface/src/lib/providerIcons.tsx
- interface/src/routes/Settings.tsx
- src/api/models.rs
- src/api/providers.rs
- src/config.rs
- src/llm/model.rs
- src/llm/providers.rs
- src/llm/routing.rs
Summary
This PR adds first-class support for Kilo Gateway across config, routing, API provider management, model catalog mapping, and the Settings UI.
Changes
- `kilo_key` and `KILO_API_KEY` support in config, with automatic provider registration as `kilo` (https://api.kilo.ai/api/gateway)
- channel/branch: `kilo/anthropic/claude-sonnet-4.5`
- worker/compactor/cortex: `kilo/anthropic/claude-haiku-4.5`
- Recognition of Kilo model IDs (`kilo/...`) in provider routing helpers
- `/chat/completions` endpoint path (instead of `/v1/chat/completions`)
- Gateway headers `HTTP-Referer: https://github.com/spacedriveapp/spacebot` and `X-Title: spacebot`
- Exposed via `/api/providers` and `/api/models`

Why

Kilo is OpenAI-compatible but uses a different chat completions path shape than providers that use `/v1/chat/completions`. This makes Kilo work out of the box in Spacebot with proper routing and setup support.

Note
Comprehensive integration of Kilo Gateway support with 14 files modified. Changes span config handling, provider routing (with Claude Sonnet 4.5 for channels and Claude Haiku 4.5 for workers), OpenAI-compatible API support with custom headers, model discovery endpoints, and UI components. Documentation updated across quickstart, config guide, and roadmap to reflect Kilo among supported providers (now 12 total).
Written by Tembo for commit 064b922. This will update automatically on new commits.